Generalized Winner-Relaxing Kohonen Self-Organizing Feature Maps

Author

  • Jens Christian Claussen
Abstract

We calculate analytically the magnification behaviour of a generalized family of self-organizing feature maps inspired by a variant introduced by Kohonen in 1991, denoted here as the Winner-Relaxing Kohonen algorithm, which is shown to have a magnification exponent of 4/7. Motivated by the observation that a modification of the learning rule for the winner neuron influences the magnification law, we introduce a generalization, termed the Generalized Winner-Relaxing Kohonen algorithm, that reaches a magnification exponent of one and therefore provides optimal mapping in the sense of information theory. A parameter interpolates between the two variants, and the unmodified Kohonen map appears as a special case in between. Compared to the original Self-Organizing Map and some other approaches, the generalized Winner-Enforcing algorithm requires minimal extra computations per learning step and is conveniently easy to implement.

As the brain is assumed to be optimized by evolution for information processing, one would postulate that maximal mutual information is a sound principle governing the setup of neural structures. For feedforward neural structures with lateral inhibition, an algorithm of maximal mutual information has been defined by Linsker (Linsker 1989) using gradient descent in mutual information. It requires computationally costly integrations and has a highly nonlocal learning rule, and is therefore less favourable as a model for biological maps and less feasible for technical applications. However, both biological network structures and technical applications are (due to realization constraints) not necessarily capable of reaching this optimum. This remains a question under discussion, especially for the brain (Plumbley 1999). Even if one had quantitative experimental measurements of the magnification behaviour, the question from what self-organizing dynamics the neural structure emerged would remain.
So overall it is desirable to formulate other learning rules that maximize mutual information in a simpler way. The self-organizing feature map algorithm proposed in 1982 by Kohonen (Kohonen 1982) has become a successful model for topology-preserving primary sensory processing in the cortex (Obermayer et al. 1992), and a useful tool in technical applications (Ritter et al. 1992). Self-Organizing Feature Maps map an input space, such as the retina or skin receptor fields, into a neural layer by feedforward structures with ...
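As a concrete illustration, the learning step described above can be sketched as follows. This is a hedged reconstruction: the parameter name `lam` and the sign convention of the winner-only term are assumptions for illustration, not necessarily the paper's exact notation; `lam = 0` recovers the standard Kohonen update, while nonzero `lam` modifies only the winner's update, which is what influences the magnification law.

```python
import numpy as np

def gwrk_step(w, x, h, eta=0.1, lam=0.0):
    """One learning step of a (generalized) Winner-Relaxing Kohonen map.

    w   : (N, d) array of neuron weight vectors
    x   : (d,) input stimulus
    h   : (N, N) neighborhood matrix; h[r, s] couples neuron r to winner s
    eta : learning rate
    lam : relaxing parameter (assumed form); lam = 0 gives the plain SOM.
    """
    # Winner-take-all: neuron closest to the stimulus wins.
    s = np.argmin(np.linalg.norm(w - x, axis=1))
    # Standard Kohonen term, applied to all neurons.
    dw = eta * h[:, s][:, None] * (x - w)
    # Winner-only extra term (hedged reconstruction): the winner is shifted
    # by the summed neighborhood-weighted updates of all other neurons.
    relax = h[:, s][:, None] * (x - w)
    relax[s] = 0.0
    dw[s] += eta * lam * relax.sum(axis=0)
    return w + dw
```

For `lam = 0` the extra term vanishes and the step reduces exactly to the classical SOM rule; the analytical results quoted above concern how the stationary weight density changes as `lam` is varied.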


Similar Articles

Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner

The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed via the magnification law, which can be obtained analytically in the one-dimensional case. The Winner-Enhancing case allows one to achieve a magnification exponent of one and therefore provides optimal mapping in the sense of information theory....


Winner-Relaxing Self-Organizing Maps

A new family of self-organizing maps, the Winner-Relaxing Kohonen algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. The magnification behaviour is calculated analytically. For the original variant a magnification exponent of 4/7 is derived; the generalized version allows steering the magnification exponent over the wide range from 1/2 to 1 in the one-dimensional ...
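The quoted values admit a simple closed-form reconstruction: an exponent of the form alpha(lam) = 2 / (3 + lam) reproduces 4/7 at lam = 1/2 (the original 1991 variant), the classical SOM exponent 2/3 at lam = 0, and spans exactly 1/2 to 1 for lam in [1, -1]. This formula is an assumption inferred from the values stated here, not quoted from the paper, so treat it as a consistency check rather than the derived result.

```python
def magnification_exponent(lam):
    """Magnification exponent of the generalized Winner-Relaxing SOM
    as a function of the relaxing parameter lam.

    ASSUMED closed form alpha = 2 / (3 + lam), chosen only because it
    matches every value quoted in the surrounding text (4/7, 2/3, 1/2, 1).
    """
    return 2.0 / (3.0 + lam)
```

A quick check: `magnification_exponent(0.5)` gives 4/7 and `magnification_exponent(-1)` gives 1, the information-theoretically optimal exponent.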


Magnification Laws of Winner-Relaxing and Winner-Enhancing Kohonen Feature Maps

Self-Organizing Maps are models for the unsupervised formation of cortical receptor-field representations by stimulus-driven self-organization in laterally coupled winner-take-all feedforward structures. This paper discusses modifications of the original Kohonen model that were motivated by a potential function, with regard to their ability to set up a neural mapping of maximal mutual information. Enhancing the...


Towards an Information Density Measure for Neural Feature Maps

Many neural models have been suggested for the development of feature maps in cortical areas. Undoubtedly the most popular model is the Kohonen self-organizing map (SOM). Once the map has been learned, this network uses a competitive winner-take-all (WTA) approach to choose a single 'best' output neuron on a (typically) 2D grid for each presented input pattern. Cortical maps in biological organis...


A Scalable Digital Architecture of a Kohonen Neural Network

Kohonen self-organizing feature maps are unsupervised learning neural networks that categorize or classify data. Efficient hardware implementation of such neural networks requires a number of simplifications to the original algorithm. In particular, multiplications should be avoided by means of simplifications in the distance metric, the neighborhood function and the l...
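Two multiplier-free simplifications of the kind this abstract alludes to can be sketched as follows. These are typical choices in digital SOM designs, shown for illustration only; they are not claimed to be the exact simplifications of the cited architecture: the L1 (Manhattan) distance replaces the squared Euclidean distance in the winner search, and the weight update uses a power-of-two learning rate implemented as a bit shift.

```python
def manhattan_winner(weights, x):
    """Find the best-matching unit using the L1 (Manhattan) distance,
    a common multiplier-free substitute for squared Euclidean distance
    in hardware SOMs (illustrative, not the cited design)."""
    best, best_d = 0, None
    for i, w in enumerate(weights):
        d = sum(abs(wi - xi) for wi, xi in zip(w, x))
        if best_d is None or d < best_d:
            best, best_d = i, d
    return best

def shift_update(w, x, shift=2):
    """Move an integer weight vector toward the input by
    (x - w) / 2**shift, using only additions and bit shifts
    so that no hardware multiplier is needed."""
    return [wi + ((xi - wi) >> shift) for wi, xi in zip(w, x)]
```

With `shift=2` the effective learning rate is 1/4; larger shifts give finer, slower adaptation, which maps directly onto a barrel shifter in hardware.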



Journal:
  • CoRR

Volume cond-mat/0208414  Issue

Pages  -

Publication date 2002